
Search results for: "Jan Leike"


23 mentions found


A Safety Check for OpenAI
  2024-05-20 | by Andrew Ross Sorkin, Ravi Mattu and Bernhard Warner | www.nytimes.com | time to read: +1 min
OpenAI’s fear factor

The tech world’s collective eyebrows rose last week when Ilya Sutskever, the OpenAI co-founder who briefly led a rebellion against Sam Altman, resigned as chief scientist. “Safety culture and processes have taken a backseat to shiny products,” Jan Leike, who also resigned from OpenAI last week, wrote on the social network X. Along with Sutskever, Leike oversaw the company’s so-called superalignment team, which was tasked with making sure products didn’t become a threat to humanity. Sutskever said in his departing note that he was confident OpenAI would build artificial general intelligence that is safe and beneficial. Leike spoke for many safety-first OpenAI employees, according to Vox.
New York CNN — OpenAI says it’s hitting the pause button on a synthetic voice released with an update to ChatGPT that prompted comparisons with the fictional voice assistant portrayed by actor Scarlett Johansson in the quasi-dystopian film “Her.” “We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” OpenAI said in a post on X on Monday. A spokesperson for the company said that structure would help OpenAI better achieve its safety objectives. OpenAI President Greg Brockman responded in a longer post on Saturday, signed with both his name and Altman’s, laying out the company’s approach to long-term AI safety. “We have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it,” Brockman said.
It's a rare admission from Altman, who has worked hard to cultivate an image of relative calm amid OpenAI's ongoing chaos.

Safety team implosion

OpenAI has been in full damage-control mode following the exit of key employees working on AI safety. He said the safety team was left "struggling for compute, and it was getting harder and harder to get this crucial research done."

Silenced employees

The implosion of the safety team is a blow for Altman, who has been keen to show he's safety-conscious when it comes to developing super-intelligent AI. The usually reserved Altman even appeared to shade Google, which demoed new AI products the following day.
OpenAI's exit agreements had nondisparagement clauses threatening vested equity, Vox reported. Sam Altman said on X that the company never enforced the provision and that he was unaware of it. OpenAI employees who left the company without signing a non-disparagement agreement could have lost vested equity if they did not comply, but the policy was never used, CEO Sam Altman said on Saturday.
OpenAI's Ilya Sutskever and Jan Leike, who led a team focused on AI safety, resigned. Founders Sam Altman and Greg Brockman are now scrambling to reassure everyone. Two of OpenAI's founders, CEO Sam Altman and President Greg Brockman, are on the defensive after a shake-up in the company's safety department this week. Sutskever and Leike led OpenAI's superalignment team, which was focused on developing AI systems compatible with human interests.
A top OpenAI executive researching safety quit on Tuesday, saying that Sam Altman's company was prioritizing "shiny products" over safety. A former top safety executive at OpenAI is laying it all out. "Over the past years, safety culture and processes have taken a backseat to shiny products," Leike wrote in a lengthy thread on X on Friday.
OpenAI's Superalignment team was formed in July 2023 to mitigate AI risks, like "rogue" behavior. OpenAI has reportedly disbanded its Superalignment team after its co-leaders resigned. In the same week that OpenAI launched GPT-4o, its most human-like AI yet, the company dissolved its Superalignment team, Wired first reported. OpenAI created the team in July 2023, co-led by Ilya Sutskever and Jan Leike, and dedicated it to mitigating AI risks, such as the possibility of it "going rogue."
New York CNN — A departing OpenAI executive focused on safety is raising concerns about the company on his way out the door. His resignation followed an announcement on Tuesday by OpenAI co-founder and Chief Scientist Ilya Sutskever, who also helped lead the superalignment team, that he would leave the company. The technology will make ChatGPT more like a digital personal assistant, capable of real-time spoken conversations. “i’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman said. “i’ll have a longer post in the next couple of days.” CNN’s Samantha Delouya contributed to this report.
The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. OpenAI's Superalignment team, announced last year, has focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." "I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X. Leike wrote that he believes much more of the company's bandwidth should be focused on security, monitoring, preparedness, safety and societal impact. The update brings the GPT-4 model to everyone, including OpenAI's free users, technology chief Mira Murati said Monday in a livestreamed event.
Jan Leike, the co-lead of OpenAI's superalignment group, announced his resignation on Tuesday. Leike's exit follows the departure of Ilya Sutskever, OpenAI cofounder and chief scientist. Leike co-led OpenAI's superalignment group, a team that focuses on making its artificial intelligence systems align with human interests. Leike announced his departure hours after Sutskever, the other superalignment leader, said he was exiting. In a post on X, OpenAI cofounder Sam Altman said, "Ilya and OpenAI are going to part ways."
And the fact that there aren't such controls in place yet is a problem OpenAI recognized, per its July 2023 post. "Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans' ability to supervise AI," read OpenAI's post. "But humans won't be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence." Leike, who worked at Google's DeepMind before his gig at OpenAI, had big aspirations for keeping humans safe from the superintelligence we've created. "Maybe a once-and-for-all solution to the alignment problem is located in the space of problems humans can solve."
Two OpenAI employees who worked on safety and governance recently resigned from the company behind ChatGPT. Daniel Kokotajlo left last month and William Saunders departed OpenAI in February. Kokotajlo, who worked on the governance team, is listed as an adversarial tester of GPT-4, which was launched in March last year. OpenAI also parted ways with researchers Leopold Aschenbrenner and Pavel Izmailov, according to another report by The Information last month. OpenAI, Kokotajlo, and Saunders did not respond to requests for comment from Business Insider.
As OpenAI employees celebrated the return of CEO Sam Altman with a five-alarm office party, OpenAI software engineer Steven Heidel was busy publicly rebuffing overtures from Salesforce CEO Marc Benioff. Heidel was one of more than 700 OpenAI employees whose threatened exodus halted a would-be mutiny at one of Silicon Valley's most important AI companies. He was previously a scientist at Facebook AI Research and worked as a member of Google Brain under the supervision of Prof. Geoffrey Hinton and Ilya Sutskever. Alec Radford: Radford was hired in 2016 from a small AI company he founded in his dorm room.

Tao Xu: technical staff, worked on GPT-4 and Whisper
Christine McLeavey: technical staff, with contributions to music-related products
Christina Kim: technical staff
Christopher Hesse: technical staff
Heewoo Jun: technical staff, research
Alex Nichol: technical staff, research
William Fedus: technical staff, research
Ilge Akkaya: technical staff, research
Vineet Kosaraju: technical staff, research
Henrique Ponde de Oliveira Pinto: technical staff
Aditya Ramesh: technical staff, developed DALL-E and DALL-E 2
Prafulla Dhariwal: research scientist
Hunter Lightman: technical staff
Harrison Edwards: research scientist
Yura Burda: machine learning researcher
Tyna Eloundou: technical staff, research
Pamela Mishkin: researcher
Casey Chu: researcher
David Dohan: technical staff, research
Aidan Clark: researcher
Raul Puri: research scientist
Leo Gao: technical staff, research
Yang Song: technical staff, research
Giambattista Parascandolo
Todor Markov: machine learning researcher
Nick Ryder: technical staff
At least two-thirds of OpenAI staff have threatened to quit and join Sam Altman at Microsoft. It follows days of chaos at OpenAI after CEO Sam Altman was fired in a shock move. Nearly 500 OpenAI staff have threatened to quit unless all current board members resign and ex-CEO Sam Altman is reappointed. Late on Sunday, Microsoft CEO Satya Nadella announced that Altman and former OpenAI president Greg Brockman would be joining a new AI team at Microsoft, after efforts by investors and current employees to bring him back as OpenAI CEO fell apart. OpenAI and Microsoft did not immediately respond to a request for comment from Business Insider, made outside normal working hours.
OpenAI employees are having a tough time after Sam Altman was suddenly ousted from the company. Here's what OpenAI employees are saying about the chaotic transition. Since Friday the company has cycled through three CEOs: cofounder Sam Altman, former CTO Mira Murati, and current CEO Emmett Shear, who cofounded Twitch. More OpenAI staffers have threatened to join them unless Altman is reinstated and the board resigns. Throughout the night on Sunday, more OpenAI staffers spoke out, sharing the repeated message: "OpenAI is nothing without its people."
OpenAI wants to lure Google researchers with $10 million pay packets, The Information reported. OpenAI is in talks for another employee share sale this year that could value it at $86 billion. OpenAI is exploring options for an employee share sale that values the company at $86 billion, Bloomberg reported last month. If its recruiters are successful in enticing top Google AI researchers, they could benefit from compensation packages of between $5 million and $10 million after the latest share sale, according to The Information. Five former Google researchers were listed in the acknowledgments section of OpenAI's blog post announcing the launch of ChatGPT last November.
He is looking for research engineers, scientists, and managers. Working closely with research engineers, research scientists are responsible for advancing OpenAI's alignment research agenda. The research manager position oversees the research engineers and research scientists. An ideal candidate for the leadership role, Leike said, would have a combination of management experience and machine learning skills. OpenAI isn't just hiring for its superalignment team.
Elon Musk and Sam Altman are racing to create superintelligent AI. Musk said xAI plans to use Twitter data to train a "maximally curious" and "truth-seeking" superintelligence. Elon Musk is throwing out challenge after challenge to tech CEOs — while he wants to physically fight Meta's Mark Zuckerberg, he's now racing with OpenAI to create AI smarter than humans. On Saturday, Musk said on Twitter Spaces that his new company, xAI, is "definitely in competition" with OpenAI. Over a 100-minute discussion that drew over 1.6 million listeners, Musk explained his plan for xAI to use Twitter data to train superintelligent AI that is "maximally curious" and "truth-seeking."
OpenAI fears that superintelligent AI could lead to human extinction. It is putting together a team to ensure that superintelligent AI aligns with human interests. The new team — called Superalignment — plans to develop AI with human-level intelligence that can supervise superintelligent AI within the next four years. OpenAI CEO Sam Altman has long been calling for regulators to address AI risk as a global priority. To be sure, not everyone shares OpenAI's concerns about future problems posed by superintelligent AI.
"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." Superintelligent AI - systems more intelligent than humans - could arrive this decade, the blog post's authors predicted. The team's goal is to create a "human-level" AI alignment researcher, and then scale it through vast amounts of compute power. OpenAI says that means they will train AI systems using human feedback, train AI systems to assist human evaluation, and then finally train AI systems to actually do the alignment research. AI safety advocate Connor Leahy said the plan was fundamentally flawed because the initial human-level AI could run amok and wreak havoc before it could be compelled to solve AI safety problems.
How to Spot Robots in a World of A.I.-Generated Text
  2023-02-17 | by Keith Collins | www.nytimes.com | time to read: +9 min
A detection tool that knew which words were on the special list would be able to tell the difference between generated text and text written by a person. That would be especially helpful for this generated text, as it includes several factual inaccuracies. By contrast, the detection tool OpenAI released requires a minimum of 1,000 characters. A person could repeatedly edit generated text and check it against a detection tool until the text is identified as human-written — and that process could potentially be automated. By that time, educators and researchers had already been calling for tools to help them identify generated text.
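The "special list" scheme described above is a watermarking idea: a generator favors words from a secret list, and a detector flags text whose rate of list words is improbably high. Below is a minimal toy sketch of the detection side only; the list, threshold, and whitespace tokenization are all illustrative assumptions, not the scheme any vendor actually ships.

```python
# Toy sketch of green-list watermark detection (illustrative only).
# GREEN_LIST stands in for the secret word list a watermarking
# generator would favor; real schemes work on tokens, not words.
GREEN_LIST = {"the", "of", "and", "model", "text"}  # hypothetical list

def green_fraction(text: str) -> float:
    """Return the share of words that appear on the green list."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in GREEN_LIST)
    return hits / len(words)

def looks_generated(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose green-list rate is improbably high for human prose."""
    return green_fraction(text) >= threshold
```

This also illustrates why the article's evasion point holds: a person editing the text can keep swapping words until `green_fraction` drops below the threshold, and that loop is easy to automate.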
Sam Altman, CEO of OpenAI, walks from lunch during the Allen & Company Sun Valley Conference on July 6, 2022, in Sun Valley, Idaho. Artificial intelligence research startup OpenAI on Tuesday introduced a tool that's designed to figure out if text is human-generated or written by a computer. The release comes two months after OpenAI captured the public's attention when it introduced ChatGPT, a chatbot that generates text that might seem to have been written by a person in response to a person's prompt. "In our evaluations on a 'challenge set' of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as 'likely AI-written,' while incorrectly labeling human-written text as AI-written 9% of the time (false positives)," the OpenAI employees wrote. The new version is more prepared to handle text from recent AI systems, the employees wrote.
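The 26% and 9% figures quoted above are a true-positive rate and a false-positive rate. As a generic illustration of how such rates are computed (this is standard evaluation arithmetic, not OpenAI's code), a "positive" prediction here means "likely AI-written":

```python
def rates(y_true, y_pred):
    """Compute (true-positive rate, false-positive rate).

    y_true: 1 if the text really is AI-written, else 0.
    y_pred: 1 if the classifier labels it AI-written, else 0.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    pos = sum(y_true)               # number of AI-written samples
    neg = len(y_true) - pos         # number of human-written samples
    return tp / pos, fp / neg
```

On OpenAI's reported challenge set, the classifier's output would correspond to roughly `(0.26, 0.09)` from this function: it misses most AI text while occasionally accusing human writers.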
CNN —Two months after OpenAI unnerved some educators with the public release of ChatGPT, an AI chatbot that can help students and professionals generate shockingly convincing essays, the company is unveiling a new tool to help teachers adapt. OpenAI on Tuesday announced a new feature, called an “AI text classifier,” that allows users to check if an essay was written by a human or AI. Public schools in New York City and Seattle have already banned students and teachers from using ChatGPT on the district’s networks and devices. OpenAI now joins a small but growing list of efforts to help educators detect when a written work is generated by ChatGPT. Some companies such as Turnitin are actively working on ChatGPT plagiarism detection tools that could help teachers identify when assignments are written by the tool.
Total: 23